
Search in the Catalogues and Directories

Page: 1 2 3
Hits 1 – 20 of 52 (all results from BASE)

1. Probing for the Usage of Grammatical Number ...
2. On Homophony and Rényi Entropy ...
3. On Homophony and Rényi Entropy ...
4. On Homophony and Rényi Entropy ...
5. Finding Concept-specific Biases in Form–Meaning Associations ...
6. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ...
7. Revisiting the Uniform Information Density Hypothesis ...
8. Revisiting the Uniform Information Density Hypothesis ...
9. Modeling the Unigram Distribution ...
Read paper: https://www.aclanthology.org/2021.findings-acl.326
Abstract: The unigram distribution is the non-contextual probability of finding a specific word form in a corpus. While of central importance to the study of language, it is commonly approximated by each word's sample frequency in the corpus. This approach, being highly dependent on sample size, assigns zero probability to any out-of-vocabulary (OOV) word form; it thus produces negatively biased probabilities for OOV word forms and positively biased probabilities for in-corpus words. In this work, we argue in favor of properly modeling the unigram distribution, claiming it should be a central task in natural language processing. With this in mind, we present a novel model for estimating it in a language (a neuralization of Goldwater et al.'s (2011) model) and show it produces much better estimates across a diverse set of 7 languages than the naïve use of neural character-level language models.
URL: https://dx.doi.org/10.48448/fx5z-4a29
https://underline.io/lecture/26417-modeling-the-unigram-distribution
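The bias the abstract describes can be seen in a minimal Python sketch (not the paper's model): the sample-frequency (MLE) estimate assigns zero probability to any OOV word form, whereas even a simple add-one (Laplace) smoothed estimate, shown here purely for contrast, reserves probability mass for unseen words.

```python
from collections import Counter


def mle_unigram(corpus):
    """Sample-frequency (MLE) estimate: P(w) = count(w) / N.

    Assigns probability 0 to any out-of-vocabulary word.
    """
    counts = Counter(corpus)
    total = sum(counts.values())
    return lambda w: counts[w] / total


def laplace_unigram(corpus, vocab_size):
    """Add-one (Laplace) smoothed estimate: P(w) = (count(w) + 1) / (N + V).

    Reserves probability mass for unseen words.
    """
    counts = Counter(corpus)
    total = sum(counts.values())
    return lambda w: (counts[w] + 1) / (total + vocab_size)


corpus = ["the", "cat", "sat", "on", "the", "mat"]
p_mle = mle_unigram(corpus)
p_lap = laplace_unigram(corpus, vocab_size=10)

p_mle("dog")  # 0.0: MLE gives OOV words zero probability
p_lap("dog")  # positive: smoothing leaves mass for unseen words
```

The hypothetical `vocab_size` here stands in for an assumed vocabulary; the paper's actual model is a character-level neuralization of Goldwater et al.'s model, not frequency smoothing.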
10. A Bayesian Framework for Information-Theoretic Probing ...
11. A surprisal–duration trade-off across and within the world's languages ...
12. Revisiting the Uniform Information Density Hypothesis ...
13. What About the Precedent: An Information-Theoretic Analysis of Common Law ...
14. Modeling the Unigram Distribution ...
15. Finding Concept-specific Biases in Form–Meaning Associations ...
16. How (Non-)Optimal is the Lexicon? ...
17. Disambiguatory Signals are Stronger in Word-initial Positions ...
18. Modeling the Unigram Distribution
    In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (2021)
19. What About the Precedent: An Information-Theoretic Analysis of Common Law
    In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
20. Finding Concept-specific Biases in Form–Meaning Associations
    In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)


Results by source type:
Catalogues: 0
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 52
© 2013 – 2024 Lin|gu|is|tik | Imprint | Privacy Policy | Change privacy settings